    Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms

    Currently, diagnosis of skin diseases is based primarily on the visual pattern-recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, this remains a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making, and noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography (OCT) has become a powerful skin imaging technique. Because skin architecture varies across different parts of the body according to specific functional needs, so do the textural characteristics of OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and, in conjunction with decision-theoretic approaches, used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in diagnosing abnormalities of cutaneous microstructure and, hence, aid in determining treatment. Specifically, we demonstrate the performance of this methodology in differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.
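    The snippet below is only an illustrative sketch of the kind of texture-plus-classifier approach the abstract describes: gray-level co-occurrence (GLCM) statistics computed from an OCT image patch and fed to a support vector machine. The specific features, the SVM choice, and the names (glcm_features, X_train, y_train) are assumptions, not the authors' 63-feature, decision-theoretic implementation.

```python
# Hypothetical sketch: GLCM texture features from an OCT B-scan patch,
# fed to a simple classifier. Not the paper's pipeline.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def glcm_features(patch_uint8: np.ndarray) -> np.ndarray:
    """Return a small vector of GLCM texture statistics for one 8-bit patch."""
    glcm = graycomatrix(patch_uint8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: one feature vector per lesion patch; y: labels {0: healthy, 1: BCC, 2: SCC}.
# X_train, y_train, X_test are assumed to come from an annotated OCT dataset.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(X_train, y_train)
# predictions = clf.predict(X_test)
```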

    Supervised machine learning based multi-task artificial intelligence classification of retinopathies

    Artificial intelligence (AI) classification holds promise as a novel and affordable screening tool for the clinical management of ocular diseases. Rural and underserved areas, which suffer from a lack of access to experienced ophthalmologists, may particularly benefit from this technology. Quantitative optical coherence tomography angiography (OCTA) imaging provides excellent capability to identify subtle vascular distortions, which are useful for classifying retinovascular diseases. However, the application of AI for differentiation and classification of multiple eye diseases is not yet established. In this study, we demonstrate supervised machine learning based multi-task OCTA classification. We sought 1) to differentiate normal from diseased ocular conditions, 2) to differentiate different ocular disease conditions from each other, and 3) to stage the severity of each ocular condition. Quantitative OCTA features, including blood vessel tortuosity (BVT), blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour irregularity (FAZ-CI), were fully automatically extracted from the OCTA images. A stepwise backward elimination approach was employed to identify sensitive OCTA features and optimal feature combinations for the multi-task classification. For a proof-of-concept demonstration, diabetic retinopathy (DR) and sickle cell retinopathy (SCR) were used to validate the supervised machine learning classifier. The presented AI classification methodology is applicable and can be readily extended to other ocular diseases, holding promise to enable a mass-screening platform for clinical deployment and telemedicine.
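    A minimal sketch of stepwise backward feature elimination over the six quantitative OCTA features named above, assuming scikit-learn's SequentialFeatureSelector with a linear-kernel SVM as the base estimator; the paper's actual classifier and elimination criterion are not specified here, so these choices are illustrative.

```python
# Illustrative backward elimination over quantitative OCTA features.
# The estimator, CV scheme, and number of retained features are assumptions.
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

feature_names = ["BVT", "BVC", "VPI", "BVD", "FAZ_A", "FAZ_CI"]

def select_features(X, y, n_keep=3):
    """Return the names of the features kept by backward elimination.

    X: one row of the six OCTA features per eye; y: class label
    (e.g. 0 = control, 1 = DR, 2 = SCR); both assumed loaded elsewhere.
    """
    selector = SequentialFeatureSelector(SVC(kernel="linear"),
                                         n_features_to_select=n_keep,
                                         direction="backward", cv=5)
    selector.fit(StandardScaler().fit_transform(X), y)
    return [name for name, keep in zip(feature_names, selector.get_support()) if keep]
```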

    Associations with photoreceptor thickness measures in the UK Biobank.

    Spectral-domain OCT (SD-OCT) provides high-resolution images enabling identification of individual retinal layers. We included 32,923 participants aged 40-69 years from UK Biobank. Questionnaires, physical examination, and an eye examination including SD-OCT imaging were performed. SD-OCT-measured photoreceptor thickness comprises the overall photoreceptor layer thickness, from the inner nuclear layer to the retinal pigment epithelium (INL-RPE), and its specific sublayers: inner nuclear layer to external limiting membrane (INL-ELM), external limiting membrane to inner segment/outer segment junction (ELM-ISOS), and inner segment/outer segment junction to retinal pigment epithelium (ISOS-RPE). In multivariate regression models, the total average INL-RPE was thinner in participants who were older, female, of Black ethnicity, smokers, or had higher systolic blood pressure, more negative refractive error, lower corneal-compensated intraocular pressure (IOPcc), or lower corneal hysteresis. Overall INL-ELM, ELM-ISOS, and ISOS-RPE thicknesses were significantly associated with sex and race. Total average INL-ELM thickness was additionally associated with age and refractive error; ELM-ISOS was additionally associated with age, smoking status, systolic blood pressure, and refractive error; and ISOS-RPE was additionally associated with smoking status, IOPcc, and corneal hysteresis. Hence, we found novel associations of ethnicity, smoking, systolic blood pressure, refraction, IOPcc, and corneal hysteresis with photoreceptor thickness.
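    The regression below is a minimal sketch of the kind of adjusted model such an association analysis might use, assuming a statsmodels ordinary-least-squares fit; the column names and covariate coding are hypothetical and do not reproduce the UK Biobank field definitions or the paper's exact model.

```python
# Hypothetical adjusted regression of photoreceptor layer thickness on covariates.
# Column names and categorical coding are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

def fit_thickness_model(df: pd.DataFrame):
    """df: one row per participant with outcome (e.g. mean INL-RPE thickness,
    micrometres) and covariates, assumed to be prepared beforehand."""
    model = smf.ols(
        "inl_rpe_thickness ~ age + C(sex) + C(ethnicity) + C(smoking)"
        " + systolic_bp + spherical_equivalent + iop_cc + corneal_hysteresis",
        data=df,
    ).fit()
    return model.summary()
```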

    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans, using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index (MSSIM) metrics. Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
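    The snippet below sketches two pieces of the described workflow that follow directly from the abstract: constructing a noisy/clean training pair by adding Gaussian noise to a multi-frame B-scan, and scoring a denoised output against its multi-frame reference with the structural similarity index. The noise level, intensity range, and function names are assumptions; the custom network architecture itself is not reproduced.

```python
# Hedged sketch of training-pair construction and one evaluation metric.
# Assumes B-scans scaled to [0, 1]; sigma is an illustrative noise level.
import numpy as np
from skimage.metrics import structural_similarity

def make_training_pair(clean_bscan: np.ndarray, sigma: float = 0.1):
    """Return (noisy, clean): clean multi-frame B-scan plus additive Gaussian noise."""
    noisy = clean_bscan + np.random.normal(0.0, sigma, clean_bscan.shape)
    return np.clip(noisy, 0.0, 1.0), clean_bscan

def mssim(denoised: np.ndarray, clean: np.ndarray) -> float:
    """Mean structural similarity between a denoised and a multi-frame B-scan."""
    return structural_similarity(denoised, clean, data_range=1.0)
```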

    Microperimetric evaluation in patients with adult-onset foveomacular vitelliform dystrophy

    INTRODUCTION: To compare mean best-corrected visual acuity (BCVA), retinal sensitivity (RS), and bivariate contour ellipse area (BCEA) between patients with adult-onset foveomacular vitelliform dystrophy (AOFVD) and healthy subjects (HSs), also reporting functional disease-related changes across the different stages of AOFVD. MATERIALS AND METHODS: In this observational cross-sectional study, 19 patients (30 eyes; 12 female and 7 male) with AOFVD were enrolled, and 30 healthy subjects (30 eyes; 16 female and 14 male) were recruited as an age-matched control group (74.36 ± 9.17 vs. 71.83 ± 6.99 years, respectively; P = 0.11). All patients underwent a complete ophthalmologic examination, fundus autofluorescence, fluorescein angiography, spectral-domain optical coherence tomography, and microperimetry (MP-1) analysis. The data collection included mean BCVA, mean RS measured by MP-1, BCEA, and central retinal thickness. RESULTS: All the functional parameters (BCVA, RS, and BCEA) were significantly worse in the AOFVD group than in the HS group. Subgroup analysis showed that the most significant functional changes, quantified by mean BCVA, RS, and BCEA, occurred in the atrophic stage (P = 0.03, P = 0.01, and P = 0.001, respectively). All the functional parameters were well correlated across the different stages. CONCLUSIONS: This study further confirms the good visual prognosis in AOFVD eyes. Measuring fixation stability with the BCEA provides a useful evaluation of visual performance that complements traditional functional parameters, and it may also serve rehabilitative purposes in atrophic eyes.
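    For reference, the bivariate contour ellipse area can be computed from fixation coordinates with the standard formula BCEA = 2kπσxσy√(1−ρ²), where P = 1−e^(−k) sets the proportion of fixation points enclosed (k ≈ 1.14 for 68.2%). The sketch below assumes fixation positions in degrees and a 68.2% contour; it illustrates the general BCEA calculation rather than the MP-1 device's implementation.

```python
# Illustrative BCEA computation from fixation coordinates (degrees).
# The 68.2% contour and input units are assumptions for demonstration.
import numpy as np

def bcea(x_deg: np.ndarray, y_deg: np.ndarray, proportion: float = 0.682) -> float:
    """BCEA (deg^2) from horizontal/vertical fixation positions."""
    k = -np.log(1.0 - proportion)              # contour scaling: P = 1 - exp(-k)
    sigma_x = np.std(x_deg, ddof=1)            # horizontal SD of fixation
    sigma_y = np.std(y_deg, ddof=1)            # vertical SD of fixation
    rho = np.corrcoef(x_deg, y_deg)[0, 1]      # correlation between axes
    return 2.0 * k * np.pi * sigma_x * sigma_y * np.sqrt(1.0 - rho ** 2)
```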